Meter analysis#
## Importing compiam to the project
import compiam
# Import extras and suppress warnings to keep the tutorial clean
import os
from pprint import pprint
import warnings
warnings.filterwarnings('ignore')
Carnatic and Hindustani Music Rhythm datasets#
This is a good moment to introduce the (CompMusic) Carnatic and Hindustani Rhythm Datasets. These datasets, which include audio recordings, musically relevant metadata, and beat and meter annotations, can be downloaded from Zenodo upon request and used through the mirdata dataloaders available from release 0.3.7.
Let’s initialise an instance of these datasets and browse through the available data.
cmr = compiam.load_dataset(
"compmusic_carnatic_rhythm",
data_home=os.path.join("../audio/mir_datasets"),
version="full_dataset"
)
cmr
The compmusic_carnatic_rhythm dataset
----------------------------------------------------------------------------------------------------
Call the .cite method for bibtex citations.
----------------------------------------------------------------------------------------------------
CompMusic Carnatic Music Rhythm class
Args:
track_id (str): track id of the track
data_home (str): Local path where the dataset is stored. default=None
If `None`, looks for the data in the default directory, `~/mir_datasets`
Attributes:
audio_path (str): path to audio file
beats_path (str): path to beats file
meter_path (str): path to meter file
Cached Properties:
beats (BeatData): beats annotation
meter (string): meter annotation
mbid (string): MusicBrainz ID
name (string): name of the recording in the dataset
artist (string): artist's name
release (string): release name
lead_instrument_code (string): code for the lead instrument
taala (string): taala annotation
raaga (string): raaga annotation
num_of_beats (int): number of beats in annotation
num_of_samas (int): number of samas in annotation
----------------------------------------------------------------------------------------------------
For showcasing purposes, we include a single-track version of the (CompMusic) Carnatic Rhythm Dataset within the materials of this tutorial. This dataset is private, but can be requested and downloaded for research purposes.
Reading through the dataset details, we can see the data available for each track in the dataloader. We will now load the tracks and pick the one chosen for this tutorial. Bear in mind that you can access the list of identifiers in the dataloader through the .track_ids attribute.
track = cmr.load_tracks()["10001"]
Let’s print out the available annotations for this example track.
track.beats
BeatData(confidence, confidence_unit, position_unit, positions, time_unit, times)
BeatData is an annotation type used in mirdata to store information related to rhythmic beats. We will print the time-steps of the first 20 annotated beats.
track.beats.times[:20]
array([ 0.627846, 1.443197, 2.184898, 2.944127, 3.732721, 4.476644,
5.293333, 6.078821, 6.825918, 7.639819, 8.42873 , 9.201905,
10.168209, 10.982404, 11.784082, 12.55551 , 13.325556, 14.09551 ,
14.858866, 15.585941])
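From these time-stamps we can already derive some basic rhythmic information. As a quick sketch (using the beat times printed above, copied by hand), the inter-beat intervals give a rough local tempo estimate:

```python
import numpy as np

# First annotated beat times in seconds, copied from the output above
times = np.array([0.627846, 1.443197, 2.184898, 2.944127, 3.732721,
                  4.476644, 5.293333, 6.078821, 6.825918, 7.639819])

# Inter-beat intervals (IBIs): differences between consecutive beats
ibis = np.diff(times)

# The median IBI is robust to a few irregular beats; 60 / IBI converts to BPM
tempo_bpm = 60.0 / np.median(ibis)  # roughly 76 bpm for this excerpt
```

The median is preferred over the mean here so that a single misplaced annotation does not skew the estimate.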
Let’s also check the unit in which these annotations are given.
track.beats.time_unit
's'
We can also observe the positions in the cycle (if available) that each beat occupies.
track.beats.positions[:20]
array([1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4])
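Since position 1 marks the sama (the first beat of the tala cycle), we can combine the two arrays above to locate the cycle boundaries. A minimal sketch, using the times and positions printed above copied by hand:

```python
import numpy as np

# Beat positions within the cycle and their times, copied from the outputs above
positions = np.array([1, 2, 3, 4, 5, 6, 7, 8,
                      1, 2, 3, 4, 5, 6, 7, 8,
                      1, 2, 3, 4])
times = np.array([0.627846, 1.443197, 2.184898, 2.944127, 3.732721, 4.476644,
                  5.293333, 6.078821, 6.825918, 7.639819, 8.42873, 9.201905,
                  10.168209, 10.982404, 11.784082, 12.55551, 13.325556, 14.09551,
                  14.858866, 15.585941])

# The sama occurs wherever position == 1: one time-stamp per cycle start
sama_times = times[positions == 1]
```

In this excerpt three full cycle starts are annotated, roughly six seconds apart, which is consistent with eight beats per cycle at the tempo observed above.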
mirdata annotations have been created aiming at providing standardized and useful data structures for MIR-related annotations. In this example we showcase the use of BeatData, but many more annotation types are included in mirdata. Make sure to check them out!
Let’s now observe what the meter annotation looks like.
track.meter
'8/4'
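The meter is stored as a plain string, so splitting it gives the number of beats per cycle, which matches the positions cycling from 1 to 8 above. A small sketch with a hypothetical helper (`parse_meter` is not part of mirdata):

```python
# Hypothetical helper: split a "N/D" meter string into its two integer parts
def parse_meter(meter: str):
    num, denom = meter.split("/")
    return int(num), int(denom)

beats_per_cycle, beat_unit = parse_meter("8/4")  # (8, 4)
```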
Akshara pulse tracker#
We can now extract the beats (referred to as akshara pulses in the context of Indian Art Music) from the input recording. Like the other extractors and models in compiam (unless indicated otherwise), the inference method takes an audio path as input.
Let’s first listen to the audio example we are going to be using to showcase this tool.
import IPython.display as ipd
# Let's also listen to the input audio
ipd.Audio(
data=track.audio[0],
rate=track.audio[1]
)
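Once the pulse tracker has produced its estimates, we will want to compare them against the reference beat annotations. Libraries such as mir_eval provide standard beat-evaluation metrics for this; purely as an illustration of the idea, here is a toy F-measure with a ±70 ms tolerance, using hypothetical estimated beats against the first annotated ones:

```python
import numpy as np

def beat_f_measure(estimated, reference, tol=0.07):
    """Toy F-measure: an estimate matches a reference beat if it falls
    within +/- tol seconds; each reference beat matches at most once."""
    matched = 0
    used = np.zeros(len(reference), dtype=bool)
    for t in estimated:
        diffs = np.abs(reference - t)
        j = int(np.argmin(diffs))
        if diffs[j] <= tol and not used[j]:
            used[j] = True
            matched += 1
    precision = matched / len(estimated)
    recall = matched / len(reference)
    return 2 * precision * recall / (precision + recall) if matched else 0.0

# Reference beats copied from the annotations above; estimates are made up
reference = np.array([0.627846, 1.443197, 2.184898, 2.944127])
estimated = np.array([0.63, 1.45, 2.20, 3.10])
score = beat_f_measure(estimated, reference)  # 3 of 4 beats match
```

For real evaluations, prefer the battle-tested implementations in mir_eval.beat over a hand-rolled metric like this one.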